50 research outputs found

    Algorithms for generalized potential games with mixed-integer variables

    We consider generalized potential games, which constitute a fundamental subclass of generalized Nash equilibrium problems. We propose different methods to compute solutions of generalized potential games with mixed-integer variables, i.e., games in which some variables are continuous while the others are discrete. We investigate which types of equilibria of the game can be computed by minimizing a potential function over the common feasible set. In particular, for a wide class of generalized potential games, we characterize the equilibria that can be computed by minimizing potential functions as Pareto solutions of a particular multi-objective problem, and we show how different potential functions can be used to select equilibria. We propose a new Gauss–Southwell algorithm to compute approximate equilibria of any generalized potential game with mixed-integer variables. We show that this method converges in a finite number of steps and give an upper bound on this number. Moreover, we carry out a thorough analysis of the behaviour of approximate equilibria with respect to exact ones. Finally, we report numerical experiments that show the viability of the proposed approaches.
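    To make the greedy scheme concrete, here is a minimal sketch of a Gauss–Southwell-style best-response loop on a potential function; the potential P, the per-player candidate sets, and the tolerance eps are illustrative placeholders, not the algorithm or data of the paper.

```python
import numpy as np

def gauss_southwell_potential(P, candidates, x0, eps=1e-6, max_iter=1000):
    """Greedy (Gauss-Southwell-style) best-response scheme on a potential function P.

    P          : callable, potential value of the joint strategy vector x
    candidates : candidates[i](x) returns a finite set of trial strategies for player i
                 (a truly continuous player would instead solve an exact subproblem)
    Stops when no single player can decrease P by more than eps (eps-equilibrium).
    """
    x = list(x0)
    for _ in range(max_iter):
        base = P(x)
        best_gain, best_player, best_move = eps, None, None
        for i, cand in enumerate(candidates):
            for xi in cand(x):
                trial = x.copy()
                trial[i] = xi
                gain = base - P(trial)
                if gain > best_gain:
                    best_gain, best_player, best_move = gain, i, xi
        if best_player is None:
            return x            # no player improves the potential by more than eps
        x[best_player] = best_move
    return x

# Toy game: player 0 chooses an integer in {0,...,5}, player 1 a real in [0, 5]
# (the continuous player is approximated by a fine grid only for this illustration).
P = lambda x: (x[0] - 2.3) ** 2 + (x[1] - 1.7) ** 2 + 0.5 * x[0] * x[1]
candidates = [lambda x: range(6), lambda x: np.linspace(0.0, 5.0, 101)]
print(gauss_southwell_potential(P, candidates, x0=[0, 0.0]))
```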

    Computing all solutions of Nash equilibrium problems with discrete strategy sets

    The Nash equilibrium problem is a widely used tool to model non-cooperative games. Many solution methods have been proposed in the literature to compute solutions of Nash equilibrium problems with continuous strategy sets, but, aside from some specific methods for particular applications, there are no general algorithms to compute solutions of Nash equilibrium problems in which the strategy set of each player is assumed to be discrete. We define a branching method to compute the whole solution set of Nash equilibrium problems with discrete strategy sets. This method is equipped with a procedure that, by fixing variables, effectively prunes the branches of the search tree. Furthermore, we propose a preliminary procedure that, by shrinking the feasible set, improves the performance of the branching method when tackling a particular class of problems. Moreover, we prove existence of equilibria and propose an extremely fast Jacobi-type method that computes one equilibrium for a new class of Nash equilibrium problems with discrete strategy sets. Our numerical results show that all proposed algorithms work very well in practice.
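    As an illustration of the flavour of a Jacobi-type scheme for discrete strategy sets (not the authors' specific method), the following sketch iterates simultaneous best responses on a toy two-player game and stops at a fixed point, which is by construction a Nash equilibrium.

```python
def jacobi_discrete_nep(costs, strategy_sets, x0, max_iter=100):
    """Plain Jacobi best-response iteration for a Nash game with finite strategy sets.

    costs[i](x)      : cost of player i at the joint strategy x (lower is better)
    strategy_sets[i] : finite collection of strategies of player i
    All players update simultaneously; a fixed point is a Nash equilibrium.
    """
    x = list(x0)
    for _ in range(max_iter):
        x_new = []
        for i, S in enumerate(strategy_sets):
            def cost_of(si, i=i):
                trial = x.copy()
                trial[i] = si
                return costs[i](trial)
            x_new.append(min(S, key=cost_of))
        if x_new == x:
            return x                     # no player wants to deviate
        x = x_new
    return None                          # no fixed point within the iteration budget

# Toy 2-player game with strategies {0, 1, 2}.
costs = [lambda x: (x[0] - x[1]) ** 2 + x[0],
         lambda x: (x[1] - x[0]) ** 2]
print(jacobi_discrete_nep(costs, [range(3), range(3)], x0=[2, 0]))
```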

    Computing equilibria of Cournot oligopoly models with mixed-integer quantities

    We consider Cournot oligopoly models in which some variables represent indivisible quantities. These models can be addressed by computing equilibria of Nash equilibrium problems in which the players solve mixed-integer nonlinear problems. In the literature, there are no methods to compute equilibria of this type of Nash game. We propose a Jacobi-type method for computing solutions of Nash equilibrium problems with mixed-integer variables. This algorithm is a generalization of a recently proposed method for the solution of discrete so-called “2-groups partitionable” Nash equilibrium problems. We prove that our algorithm converges in a finite number of iterations to approximate equilibria under reasonable conditions. Moreover, we give conditions for the existence of approximate equilibria. Finally, we give numerical results to show the effectiveness of the proposed method.
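    A toy instance of the model in question: a linear-demand Cournot duopoly in which one firm's quantity is integer-constrained. The sketch below uses simple sequential best-response sweeps rather than the paper's Jacobi-type algorithm, and all demand and cost data are made up for illustration.

```python
def cournot_best_response(i, q, a, b, c, integer):
    """Best response of firm i with inverse demand p = a - b*sum(q) and cost c[i]*q_i.
    Profit is concave in q_i, so an integer-constrained firm only needs to compare
    the two integer neighbours of the continuous maximiser."""
    rivals = sum(q) - q[i]
    q_cont = max(0.0, (a - c[i] - b * rivals) / (2.0 * b))
    if not integer[i]:
        return q_cont
    def profit(qi):
        return qi * (a - b * (rivals + qi)) - c[i] * qi
    return max((int(q_cont), int(q_cont) + 1), key=profit)

def best_response_sweeps(a, b, c, integer, sweeps=100):
    """Sequential best-response sweeps; a fixed point is a Nash equilibrium."""
    q = [0.0] * len(c)
    for _ in range(sweeps):
        q_prev = q.copy()
        for i in range(len(c)):
            q[i] = cournot_best_response(i, q, a, b, c, integer)
        if q == q_prev:
            return q
    return q

# Firm 0 produces indivisible units, firm 1 a divisible quantity.
print(best_response_sweeps(a=20.0, b=1.0, c=[2.0, 4.0], integer=[True, False]))
```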

    Solution methods for quasi variational inequalities

    We propose to solve a general quasi-variational inequality by using its Karush-Kuhn-Tucker conditions. To this end, we use a globally convergent algorithm based on a potential reduction approach. We establish global convergence results for many interesting instances of quasi-variational inequalities, vastly broadening the class of problems that can be solved with theoretical guarantees. Our numerical tests are very promising and show the practical viability of the approach.
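    For reference, the KKT system exploited by this kind of approach can be written as follows, for a quasi-variational inequality QVI(F, K) with moving feasible set K(x) = {y : g(y, x) ≤ 0} (notation assumed here, not taken from the paper); under a suitable constraint qualification, x solves the QVI if and only if the system below admits a multiplier λ.

```latex
% KKT system of the QVI with K(x) = { y : g(y, x) <= 0 },  g : R^n x R^n -> R^m;
% \nabla_y g_i(x, x) is the partial gradient w.r.t. the first argument, evaluated at y = x.
\begin{aligned}
  & F(x) + \sum_{i=1}^{m} \lambda_i \, \nabla_y g_i(x, x) = 0, \\
  & \lambda_i \ge 0, \qquad g_i(x, x) \le 0, \qquad \lambda_i \, g_i(x, x) = 0, \qquad i = 1, \dots, m.
\end{aligned}
```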

    Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training

    We consider the convex quadratic linearly constrained problem with bounded variables and with a huge and dense Hessian matrix that arises in many applications, such as the training problem of bias support vector machines. We propose a decomposition algorithmic scheme suitable for parallel implementations and prove global convergence under suitable conditions. Focusing on support vector machine training, we outline how these assumptions can be satisfied in practice and suggest various specific implementations. Extensions of the theoretical results to general linearly constrained problems are provided. We include numerical results on support vector machines with the aim of showing the viability and the effectiveness of the proposed scheme.
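    A rough sketch of the kind of Jacobi-type block decomposition step the abstract refers to, on a toy bound-constrained convex quadratic; the blockwise inexact solves, the damping line search, and the data are illustrative assumptions, and the single linear constraint of the bias-SVM dual is omitted here for simplicity.

```python
import numpy as np

def parallel_block_qp(Q, c, lo, hi, blocks, iters=200):
    """Toy Jacobi-type block decomposition for  min 0.5*x'Qx + c'x  s.t.  lo <= x <= hi.

    Each block inexactly solves its subproblem (a few projected-gradient steps) with
    the other blocks frozen; the blockwise moves are then combined and damped by a
    backtracking step so that the objective never increases (Q convex)."""
    f = lambda z: 0.5 * z @ Q @ z + c @ z
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(iters):
        x_trial = x.copy()
        for blk in blocks:
            Qbb = Q[np.ix_(blk, blk)]
            Lb = np.linalg.norm(Qbb, 2) + 1e-12       # block curvature bound
            xb = x[blk].copy()
            for _ in range(5):                         # inexact block solve
                gb = Q[blk] @ x + Qbb @ (xb - x[blk]) + c[blk]
                xb = np.clip(xb - gb / Lb, lo[blk], hi[blk])
            x_trial[blk] = xb
        d = x_trial - x                                # combined Jacobi direction
        step = 1.0
        while step > 1e-8 and f(x + step * d) > f(x):  # damping keeps descent
            step *= 0.5
        x = x + step * d
    return x

# Small example: 4 bounded variables split into two weakly coupled blocks.
Q = np.array([[4.0, 1.0, 0.0, 0.0], [1.0, 3.0, 0.2, 0.0],
              [0.0, 0.2, 2.0, 0.5], [0.0, 0.0, 0.5, 2.0]])
c = np.array([-1.0, -2.0, -3.0, 1.0])
lo, hi = np.zeros(4), np.ones(4)
blocks = [np.array([0, 1]), np.array([2, 3])]
print(np.round(parallel_block_qp(Q, c, lo, hi, blocks), 4))
```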

    Parallel Selective Algorithms for Big Data Optimization

    We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities "in between", with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms. Comment: this work is an extended version of the conference paper presented at IEEE ICASSP'14. The first and the second author contributed equally to the paper. This revised version contains new numerical results on nonconvex quadratic problems.
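    For the LASSO instance mentioned above, the simplest fully parallel member of such a framework updates every scalar block by closed-form soft-thresholding and damps the combined move. The sketch below uses a uniform curvature weight, so it reduces to a relaxed proximal-gradient step; it is an illustration under these assumptions, not the exact algorithm of the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form solution of min_u 0.5*(u - v)^2 + t*|u| (elementwise)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def parallel_lasso(A, b, lam, gamma=0.9, iters=500):
    """Fully parallel scalar-block updates for  min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Every block solves its convex quadratic model plus the l1 term in closed form
    (soft-thresholding); gamma damps the combined move."""
    L = np.linalg.norm(A, 2) ** 2                       # curvature bound / proximal weight
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x_hat = soft_threshold(x - grad / L, lam / L)   # all blocks at once
        x = x + gamma * (x_hat - x)
    return x

# Small synthetic sparse-recovery example.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.round(parallel_lasso(A, b, lam=1.0), 3))
```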

    Flexible Parallel Algorithms for Big Data Optimization

    We propose a decomposition framework for the parallel optimization of the sum of a differentiable function and a (block) separable nonsmooth, convex one. The latter term is typically used to enforce structure in the solution, as, for example, in Lasso problems. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (Southwell-type) ones, as well as virtually all possibilities in between (e.g., gradient- or Newton-type methods), with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results show that the new method compares favorably to existing algorithms. Comment: submitted to IEEE ICASSP 2014.
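    In standard notation (assumed here, not copied from the paper), the composite problem targeted by this framework can be written as:

```latex
% F differentiable, X closed convex, G block-separable, convex and possibly nonsmooth:
\min_{\mathbf{x} \in X} \; V(\mathbf{x}) = F(\mathbf{x}) + \sum_{i=1}^{N} g_i(\mathbf{x}_i)
% Lasso instance: F(x) = \tfrac{1}{2}\|Ax - b\|_2^2  and  g_i(x_i) = \lambda |x_i|.
```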

    Inertial projection-type methods for solving quasi-variational inequalities in real Hilbert spaces

    In this paper, we introduce an inertial projection-type method with different updating strategies for solving quasi-variational inequalities with strongly monotone and Lipschitz continuous operators in real Hilbert spaces. Under standard assumptions, we establish several strong convergence results for the proposed algorithm. Preliminary numerical experiments demonstrate the potential applicability of our scheme compared with some related methods in the literature.
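    A minimal sketch of an inertial projection step of the type described, on a toy quasi-variational inequality with a strongly monotone affine operator and a moving-ball feasible set; the operator, the moving set, and the parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_ball(z, center, radius):
    """Euclidean projection of z onto the ball {y : ||y - center|| <= radius}."""
    d = z - center
    n = np.linalg.norm(d)
    return z if n <= radius else center + radius * d / n

def inertial_projection_qvi(F, moving_set_proj, x0, lam=0.1, theta=0.3, iters=500):
    """Inertial projection-type iteration for a QVI: find x in C(x) with
       <F(x), y - x> >= 0 for all y in C(x).
    Update: w_k = x_k + theta*(x_k - x_{k-1});  x_{k+1} = P_{C(w_k)}(w_k - lam*F(w_k))."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        w = x + theta * (x - x_prev)                 # inertial extrapolation
        x_prev, x = x, moving_set_proj(w, w - lam * F(w))
    return x

# Toy instance: strongly monotone affine operator and moving ball C(x) = B(0.1*x, 1).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -2.0])
F = lambda x: A @ x + b
moving_set_proj = lambda x, z: project_ball(z, 0.1 * x, 1.0)   # projects z onto C(x)
print(np.round(inertial_projection_qvi(F, moving_set_proj, x0=np.zeros(2)), 4))
```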